technology and society
Is Trust Correlated With Explainability in AI? A Meta-Analysis
This study critically examines the commonly held assumption that explainability in artificial intelligence (AI) systems inherently boosts user trust. Utilizing a meta-analytical approach, we conducted a comprehensive examination of the existing literature to explore the relationship between AI explainability and trust. Our analysis, incorporating data from 90 studies, reveals a statistically significant but moderate positive correlation between the explainability of AI systems and the trust they engender among users. This indicates that while explainability contributes to building trust, it is not the sole or predominant factor in this equation. In addition to academic contributions to the field of Explainable AI (XAI), this research highlights its broader socio-technical implications, particularly in promoting accountability and fostering user trust in critical domains such as healthcare and justice. By addressing challenges like algorithmic bias and ethical transparency, the study underscores the need for equitable and sustainable AI adoption. Rather than focusing solely on immediate trust, we emphasize the normative importance of fostering authentic and enduring trustworthiness in AI systems.
- North America > United States > Illinois > Cook County > Chicago (0.04)
- North America > Canada > Ontario > Durham Region > Oshawa (0.04)
- North America > United States > New York > New York County > New York City (0.04)
- (5 more...)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (0.94)
- Information Technology > Artificial Intelligence > Representation & Reasoning (1.00)
- Information Technology > Artificial Intelligence > Natural Language (1.00)
- Information Technology > Artificial Intelligence > Issues > Social & Ethical Issues (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.68)
AI is the Beginning of the End of Advertising as We Know It
AI (Artificial Intelligence) won't just start appearing one day like an all-knowing computer Genie in a lamp-shaped cloud; instead, you'll be surprised and amazed at how it is currently surfacing, and will continue to surface, in subtle ways that will change many things, including entire industries and how you buy their products and services. Some things we have to purchase to survive in the modern world take research, study, and comparison, and are generally hard to get good, accurate, and relevant information on, so we end up picking arbitrarily or by copying what people we know did. I'm looking at you, auto insurance, cell service, automobiles, and computers, to name a few. AI won't be one big thing in our lives; it will be thousands of little things. They won't usually manifest themselves in an all-powerful central role like Alexa or Siri; they will be an invisible army of nameless extras, hardly noticeable in the background and yet essential to almost every scene of our lives.
- Banking & Finance > Insurance (0.68)
- Transportation > Passenger (0.55)
AI and Blackness: Towards moving beyond bias and representation
Dancy, Christopher L., Saucier, P. Khalil
In this paper, we argue that AI ethics must move beyond the concepts of race-based representation and bias, and towards those that probe the deeper relations that impact how these systems are designed, developed, and deployed. Many recent discussions on ethical considerations of bias in AI systems have centered on racial bias. We contend that antiblackness in AI requires more of an examination of the ontological space that provides a foundation for the design, development, and deployment of AI systems. We examine what this contention means from the perspective of the sociocultural context in which AI systems are designed, developed, and deployed and focus on intersections with anti-Black racism (antiblackness). To bring these multiple perspectives together and show an example of antiblackness in the face of attempts at de-biasing, we discuss results from auditing an existing open-source semantic network (ConceptNet). We use this discussion to further contextualize antiblackness in design, development, and deployment of AI systems and suggest questions one may ask when attempting to combat antiblackness in AI systems.
- North America > United States > Pennsylvania (0.04)
- North America > United States > New York (0.04)
- North America > United States > New Jersey > Mercer County > Trenton (0.04)
- North America > United States > California (0.04)
Algorithms and art: Researchers explore impact of AI on music and culture
Global access to art, culture, and entertainment products – music, movies, books, and more – has undergone fundamental changes over the past 20 years in light of groundbreaking developments in artificial intelligence. For example, users of streaming services like Netflix and Spotify have data collected and analyzed by algorithms to determine their streaming habits – resulting in recommendations that cater to their tastes. But this is only one of the many ways in which AI tools are transforming the arts and culture industries. AI is also being used in the production of music and other art, with algorithms generating photos or writing songs on their own. Warner Music even "signed" an algorithm to a record deal in 2019.
- North America > Canada > Ontario > Toronto (0.16)
- North America > United States > Wisconsin > Dane County > Madison (0.05)
- North America > United States > Indiana > Madison County > Anderson (0.05)
- (2 more...)
- Media > Music (0.90)
- Media > Television (0.56)
The State of AI Ethics Report (Volume 4)
Gupta, Abhishek, Royer, Alexandrine, Wright, Connor, Heath, Victoria, Fancy, Muriam, Ganapini, Marianna Bergamaschi, Egan, Shannon, Sweidan, Masa, Akif, Mo, Butalid, Renjie
The 4th edition of the Montreal AI Ethics Institute's The State of AI Ethics captures the most relevant developments in the field of AI Ethics since January 2021. This report aims to help anyone, from machine learning experts to human rights activists and policymakers, quickly digest and understand the ever-changing developments in the field. Through research and article summaries, as well as expert commentary, this report distills the research and reporting surrounding various domains related to the ethics of AI, with a particular focus on four key themes: Ethical AI, Fairness & Justice, Humans & Tech, and Privacy. In addition, The State of AI Ethics includes exclusive content written by world-class AI Ethics experts from universities, research institutes, consulting firms, and governments. Opening the report is a long-form piece by Edward Higgs (Professor of History, University of Essex) titled "AI and the Face: A Historian's View." In it, Higgs examines the unscientific history of facial analysis and how AI might be repeating some of those mistakes at scale. The report also features chapter introductions by Alexa Hagerty (Anthropologist, University of Cambridge), Marianna Ganapini (Faculty Director, Montreal AI Ethics Institute), Deborah G. Johnson (Emeritus Professor, Engineering and Society, University of Virginia), and Soraj Hongladarom (Professor of Philosophy and Director, Center for Science, Technology and Society, Chulalongkorn University in Bangkok). This report should be used not only as a point of reference and insight on the latest thinking in the field of AI Ethics, but should also be used as a tool for introspection as we aim to foster a more nuanced conversation regarding the impacts of AI on the world.
- Asia > China (0.67)
- North America > Canada > Quebec > Montreal (0.44)
- North America > United States > Virginia (0.24)
- (35 more...)
- Summary/Review (1.00)
- Research Report > New Finding (1.00)
- Questionnaire & Opinion Survey (1.00)
- (4 more...)
- Social Sector (1.00)
- Media > News (1.00)
- Leisure & Entertainment > Sports (1.00)
- (17 more...)
U of T's Schwartz Reisman Institute and AI Global to develop global certification mark for trustworthy AI
The products and services we use in our daily lives have to abide by safety and security standards, from car airbags to construction materials. But no such broad, internationally agreed-upon standards exist for artificial intelligence. And yet, AI tools and technologies are steadily being integrated into all aspects of our lives. AI's potential benefits to humanity, such as improving health-care delivery or tackling climate change, are immense. But potential harms caused by AI tools – from algorithmic bias and labour displacement to risks associated with autonomous vehicles and weapons – risk leading to a lack of trust in AI technologies. To tackle these problems, a new partnership between AI Global, a nonprofit organization focused on advancing responsible and ethical adoption of artificial intelligence, and the Schwartz Reisman Institute for Technology and Society (SRI) at the University of Toronto will create a globally recognized certification mark for the responsible and trusted use of AI systems.
What Mark Zuckerberg Gets Wrong--and Right--About Hate Speech
When he testified before Congress last month, Facebook CEO Mark Zuckerberg discussed the problem of using artificial intelligence to identify online hate speech. He said he was optimistic that in five to 10 years, "We will have AI tools that can get into some of the linguistic nuances of different types of content to be more accurate in flagging content for our systems, but today we're just not there on that." Brittan Heller (@brittanheller) is director of the Anti-Defamation League's Center for Technology and Society and works with social media companies to reduce cyberhate and online harassment. As an expert on hate speech who recently developed an AI-based system to study online hate, I can confidently say that Zuckerberg is both right and wrong. He is right that AI is not a panacea, since hate speech relies on nuances that algorithms cannot fully detect. At the same time, just because AI does not solve the problem entirely doesn't mean it's useless.
12 Brilliant Women in AI and Ethics to Follow in 2018 - TiEinflect 2018
The possibility of creating sentient machines that can think and act like humans raises many ethical issues. We're already encountering reinforced human bias in AI algorithms, and with autonomous "killer" robots looming on the horizon, an open discussion on the perils of unchecked AI is even more imperative. In celebration of Women's History Month, we've highlighted 12 brilliant women leading this much-needed discussion on AI & ethics and the development of responsible AI solutions that will benefit everyone. Let me know of any others we should highlight in the comments below or tweet @MiaD #TiEInflect. First on our list is Joy Buolamwini, founder of the Algorithmic Justice League, which fights bias in machine learning.
- North America > United States > California > San Francisco County > San Francisco (0.05)
- Pacific Ocean > North Pacific Ocean > San Francisco Bay (0.05)
- North America > United States > Texas (0.05)
- (4 more...)
What Our Democracy Needs to Know: A Future Tense Event.
Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society. Forget about Silicon Valley, Big Pharma, and our nation's fabled industrial conglomerates--the most formidable science and tech incubator in the history of the world is the U.S. government. But maintaining, nurturing, and continuing to develop our public "knowledge infrastructure" in and around government has become increasingly challenging. In a time of crippling political divisiveness, when the very notion of expertise has come under siege, how can we ensure that our lawmakers and citizens can continue to rely on the robust research, knowledge, and expertise-enhancing infrastructure built over the past two centuries? Join Future Tense and TechCongress on Thursday, Feb. 8, in Washington, D.C. to consider whether our democracy will have the capacity in the future to address such complex issues as genetic engineering, artificial intelligence, and climate change adaptation--and whether our quest for technological and scientific advancement continues to be subjected to sound democratic governance.
- North America > United States > Arizona (0.27)
- North America > United States > District of Columbia > Washington (0.26)
- North America > United States > California (0.26)
When technology and society outpace the law
A self-driving Lexus SUV owned by Google's parent company Alphabet struck a bus February 14 while it was testing on the streets of Mountain View, Calif. SAN FRANCISCO -- The FBI-Apple encryption battle is just the beginning of an important debate this country needs to have about what to do when U.S. innovation outpaces American law. The FBI's failure to get data it wanted from an iPhone used by one of the San Bernardino terrorists -- despite significant help from Apple -- shows that time has arrived once again. As with the coming of the telephone, the car, the radio and TV, the spread of the mobile Internet has gotten ahead of case law. In this case, with hand-held smartphones now ubiquitous, a consumer technology has outstripped the ability of the government to complete an important terrorist investigation.
- North America > United States > California > Santa Clara County > Mountain View (0.28)
- North America > United States > California > San Francisco County > San Francisco (0.26)
- Law (1.00)
- Government > Regional Government > North America Government > United States Government (1.00)
- Information Technology > Communications > Mobile (0.95)
- Information Technology > Artificial Intelligence > Robots > Autonomous Vehicles (0.41)